CS 264: Beyond Worst-Case Analysis Lecture #14: Smoothed Analysis of Pareto Curves ∗
Author
Abstract
Our next application of smoothed analysis is to Pareto curves. We begin by explaining one of several reasons why you should care about the size of Pareto curves: they govern the running time of some interesting algorithms.

In the Knapsack problem, the input comprises n items, each with a positive value v_i and a positive weight (or size) w_i. Also given is a knapsack capacity W. The goal is to choose a subset S ⊆ {1, 2, ..., n} of the items that maximizes the total value ∑_{i∈S} v_i, subject to the chosen items fitting in the knapsack (i.e., ∑_{i∈S} w_i ≤ W). When you were first introduced to this problem, you may have been told some hokey story about a burglar trying to escape with the best loot, but it really is a fundamental problem, relevant whenever you're trying to make optimal use of a limited resource.

We say that a subset S ⊆ {1, 2, ..., n} dominates T ⊆ {1, 2, ..., n} if: (i) the total value of S is at least that of T (∑_{i∈S} v_i ≥ ∑_{i∈T} v_i); (ii) the total weight of S is at most that of T (∑_{i∈S} w_i ≤ ∑_{i∈T} w_i); and (iii) at least one of these two inequalities is strict. If S dominates T, then it renders T moot: the solution T can be safely pruned without regret. The Pareto curve of a Knapsack instance is the set of all undominated solutions; see Figure 1. Geometrically, a point is dominated if and only if there is another point "to the northwest" of it. You've seen this concept before, in Lecture #2, when we studied the 2D Maxima problem. The Pareto curve corresponds, after a reflection about the y-axis, to the set of maxima of the point set that has one point per subset S of {1, 2, ..., n}, the two coordinates of a point being the total weight and total value of S. Here is a Knapsack algorithm that you may not have seen before (due to [5]).
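To make the domination idea concrete, here is a minimal sketch of a Knapsack algorithm whose running time is governed by the size of the Pareto curve: it grows the set of undominated (weight, value) pairs one item at a time, pruning every dominated solution as it goes. This is a sketch in the spirit of the algorithm the notes allude to, not necessarily the exact presentation from [5]; the function name `pareto_knapsack` and the item encoding are this example's own conventions.

```python
def pareto_knapsack(items, capacity):
    """items: list of (value, weight) pairs; returns the best total value
    achievable with total weight at most capacity."""
    # Each Pareto-optimal solution is stored as a (total_weight, total_value) point.
    pareto = [(0, 0)]  # the empty set is always undominated
    for value, weight in items:
        # Every Pareto-optimal solution using the first i items either
        # includes item i or it doesn't, so it suffices to merge the old
        # curve with a shifted copy of it and prune dominated points.
        extended = [(w + weight, v + value) for (w, v) in pareto]
        # Sort by weight ascending, breaking ties by value descending, so a
        # single sweep keeps exactly the undominated points: value must
        # strictly increase as weight grows.
        merged = sorted(pareto + extended, key=lambda p: (p[0], -p[1]))
        new_pareto = []
        best_value = -1
        for w, v in merged:
            if v > best_value:  # no point to the "northwest" of (w, v)
                new_pareto.append((w, v))
                best_value = v
        pareto = new_pareto
    # The optimal solution is the undominated point of largest value that fits.
    return max(v for w, v in pareto if w <= capacity)
```

Note that each pass costs time roughly proportional to the number of undominated solutions, which is why bounding the Pareto curve's size (e.g., via smoothed analysis) bounds the algorithm's running time.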
Similar resources
CS 264: Beyond Worst-Case Analysis Lecture #17: Smoothed Analysis of Local Search ∗
This lecture and the next are about smoothed analysis, which is perhaps the most well-studied "hybrid" analysis framework, blending average-case and worst-case analysis. (Recall our previous examples: semi-random graph models and the random-order model for online algorithms.) In smoothed analysis, an adversary first picks an input, and nature subsequently adds a "small" perturbation to it. This ...
Evaluating FPTASes for the Multi-Objective Shortest Path Problem
We discuss three FPTASes for the Multi-Objective Shortest Path problem. Two of these are known from the literature; the third is a newly proposed FPTAS aimed at exploiting small Pareto curves. The FPTASes are analyzed empirically, based on worst-case complexity, average-case complexity, and smoothed complexity. We also analyze the size of the Pareto curve under different conditions.
CS 264: Beyond Worst-Case Analysis Lecture #20: Algorithm-Specific Algorithm Selection ∗
A major theme of CS 264 is to use theory to derive good guidance about which algorithm to use to solve a given problem in a given domain. For most problems, there is no "one size fits all" algorithm, and the right algorithm to use depends on the set of inputs relevant for the application. In today's lecture, we'll turn this theme into a well-defined mathematical problem, formalized via statistic...
CS 264: Beyond Worst-Case Analysis Lecture #19: Self-Improving Algorithms ∗
The last several lectures discussed several interpolations between worst-case and average-case analysis designed to identify robust algorithms in the face of strong impossibility results for worst-case guarantees. This lecture gives another analysis framework that blends aspects of worst- and average-case analysis. In today's model of self-improving algorithms, an adversary picks an inpu...
CS 264: Beyond Worst-Case Analysis Lecture
The last few lectures discussed several interpolations between worst-case and average-case analysis designed to identify robust algorithms in the face of strong impossibility results for worst-case guarantees. This lecture gives another analysis framework that blends aspects of worst- and average-case analysis. In today's model of self-improving algorithms, an adversary picks an input distributio...
Publication date: 2014